Lab 05 - Camera: Cones classification

Robotics II

Poznan University of Technology, Institute of Robotics and Machine Intelligence


In this lab, you will train a classification algorithm as the next stage of the object detection pipeline. There are three classes, corresponding to cone colours: orange, blue, and yellow.

Part I - train the cone classifier

First, follow the interactive tutorial. It uses the SqueezeNet model for object classification. Remember to save the exported ONNX model from the final step.
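
If you train the model in PyTorch, the export step at the end of the tutorial can be sketched as below. This is only an illustration: the model variable, the output file name, and the input size are assumptions, so follow the tutorial's exact values.

```python
# Sketch of exporting a trained PyTorch model to ONNX.
# File name and input size are illustrative, not from the lab materials.

def export_to_onnx(model, onnx_path="squeezenet_cones.onnx", input_size=224):
    import torch
    model.eval()  # inference mode: disables dropout/batch-norm updates
    # Dummy input with the shape the network was trained on (NCHW).
    dummy = torch.randn(1, 3, input_size, input_size)
    torch.onnx.export(model, dummy, onnx_path)
```

The exported file is what the inference pipeline in Part II loads, so keep it in your workspace.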

Part II - build the inference pipeline

Now, we combine the detector from the previous lab with the classifier from Part I to perform inference on real video recorded onboard an autonomous racecar.

Publishing video from file to ROS topic

Cone classification inference
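
Inside the detector node, the classification stage can be sketched with OpenCV's DNN module, which loads ONNX models directly. The preprocessing values below (227x227 input, 1/255 scaling, BGR-to-RGB swap) are assumptions and must match how you trained and exported the model in Part I.

```python
# Hedged sketch: load the exported ONNX classifier and classify one
# detected cone crop. File name and preprocessing are assumptions.

def load_classifier(onnx_path="squeezenet_cones.onnx"):  # hypothetical name
    import cv2
    return cv2.dnn.readNetFromONNX(onnx_path)

def classify_crop(net, crop_bgr):
    """Return the index of the most probable class for one detected box."""
    import cv2
    import numpy as np
    blob = cv2.dnn.blobFromImage(crop_bgr, scalefactor=1.0 / 255,
                                 size=(227, 227), swapRB=True)
    net.setInput(blob)
    scores = net.forward()
    return int(np.argmax(scores))
```

Each bounding box produced by the detector is cropped from the frame and passed through `classify_crop` to assign it a cone colour.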

Note: your working file is vision_detector.py from the previous lab.

roslaunch fsds_roboticsII vision_detector.launch

Simulator recording

rosbag play ./cam_acc.bag /fsds/camera/cam_right:=/fsds/camera/cam_left

TASK

Your task is to draw the text “STOP” after the finish line. Simplify, draw the text if all classes of boxes are changed from yellow/blue to orange. Tips: * use np.all to handle NumPy logical operations * use cv2.putText to add text on image

When you are done, upload a screenshot from the rviz tool, taken at the finish line, to the eKursy platform.